

Unsupervised and Semi-Supervised Learning


Iterative Double Clustering for Unsupervised and Semi-Supervised Learning

Neural Information Processing Systems

We present a powerful meta-clustering technique called Iterative Double Clustering (IDC). The IDC method is a natural extension of the recent Double Clustering (DC) method of Slonim and Tishby that exhibited impressive performance on text categorization tasks [12]. Using synthetically generated data we empirically find that whenever the DC procedure is successful in recovering some of the structure hidden in the data, the extended IDC procedure can incrementally compute a significantly more accurate classification. IDC is especially advantageous when the data exhibits high attribute noise. Our simulation results also show the effectiveness of IDC in text categorization problems.
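For readers who want the mechanics, the following is a minimal sketch of the iterative double-clustering loop: attributes (words) are clustered first, documents are clustered over the attribute clusters, and the two phases are then alternated. KMeans here is only a stand-in for the information-bottleneck clustering used by Slonim and Tishby, and the matrix X, the cluster counts, and the iteration count are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch of the iterative double-clustering (IDC) loop.
# KMeans is a stand-in for the information-bottleneck clustering of
# Slonim and Tishby; X, the cluster counts, and n_iter are
# illustrative assumptions, not the authors' settings.
import numpy as np
from sklearn.cluster import KMeans

def idc(X, n_doc_clusters=10, n_word_clusters=50, n_iter=5, seed=0):
    """X: (n_docs, n_words) term-count matrix. Returns document labels."""
    # First DC phase: cluster words (columns) by their document profiles.
    word_labels = KMeans(n_word_clusters, n_init=10,
                         random_state=seed).fit_predict(X.T)
    for _ in range(n_iter):
        # Second DC phase: represent each document over the current
        # word clusters, then cluster documents in that reduced space.
        doc_repr = np.stack([X[:, word_labels == k].sum(axis=1)
                             for k in range(n_word_clusters)], axis=1)
        doc_labels = KMeans(n_doc_clusters, n_init=10,
                            random_state=seed).fit_predict(doc_repr)
        # The "iterative" step that extends DC to IDC: re-cluster words
        # using the new document clusters as their representation.
        word_repr = np.stack([X[doc_labels == c, :].sum(axis=0)
                              for c in range(n_doc_clusters)], axis=1)
        word_labels = KMeans(n_word_clusters, n_init=10,
                             random_state=seed).fit_predict(word_repr)
    return doc_labels
```

Each pass re-represents one side of the data matrix through the other side's current clusters, which is how the procedure can incrementally refine an initial clustering when attribute noise is high.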


Supervised and Unsupervised Learning for Data Science (Unsupervised and Semi-Supervised Learning), edited by Michael W. Berry, Azlinah Mohamed, and Bee Wah Yap (ISBN 9783030224776)

#artificialintelligence

Professor Michael W. Berry is a Full Professor in the Departments of Electrical Engineering and Computer Science (EECS) and Mathematics at the University of Tennessee, Knoxville. He served as Interim Department Head of Computer Science from January 2004 to June 2007, and as Associate Head in the Department of Electrical Engineering and Computer Science from July 2007 to July 2012. He worked in the Communications Product Division of IBM in Raleigh, NC, for about a year before accepting a research staff position in the Center for Supercomputing Research and Development at the University of Illinois at Urbana-Champaign. In 1990, he received a PhD in Computer Science from the University of Illinois at Urbana-Champaign. He has published well over 150 peer-reviewed journal and conference papers and book chapters.


Generalization Bounds For Unsupervised and Semi-Supervised Learning With Autoencoders

Epstein, Baruch, Meir, Ron

arXiv.org Machine Learning

Autoencoders are widely used for unsupervised learning and as a regularization scheme in semi-supervised learning. However, theoretical understanding of their generalization properties and of the manner in which they can assist supervised learning has been lacking. We utilize recent advances in the theory of deep learning generalization, together with a novel reconstruction loss, to provide generalization bounds for autoencoders. To the best of our knowledge, this is the first such bound. We further show that, under appropriate assumptions, an autoencoder with good generalization properties can improve any semi-supervised learning scheme. We support our theoretical results with empirical demonstrations.
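As an illustration of the regularization scheme the abstract refers to, here is a generic semi-supervised objective in which a shared encoder feeds both a classifier (supervised loss on labeled data) and a decoder (reconstruction loss on unlabeled data). The architecture, the standard MSE reconstruction term, and the weight lam are assumptions made for illustration; the paper's novel reconstruction loss and its generalization bounds are not reproduced here.

```python
# Generic semi-supervised autoencoder objective: a shared encoder feeds
# a classifier (supervised loss on labeled data) and a decoder
# (reconstruction loss on unlabeled data). Layer sizes, the MSE
# reconstruction term, and the weight lam are illustrative assumptions,
# not the loss analyzed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedAE(nn.Module):
    def __init__(self, d_in=784, d_hidden=64, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.decoder = nn.Linear(d_hidden, d_in)
        self.classifier = nn.Linear(d_hidden, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, lam=0.5):
    """Cross-entropy on labeled data plus lam * reconstruction on unlabeled."""
    logits, _ = model(x_lab)
    _, recon = model(x_unlab)
    return F.cross_entropy(logits, y_lab) + lam * F.mse_loss(recon, x_unlab)
```

The reconstruction term acts as the regularizer: it constrains the shared encoder using unlabeled data, which is the mechanism whose benefit to supervised learning the paper analyzes theoretically.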